Quickest Inference of Network Cascades With Noisy Information

Authors

Abstract

We study the problem of estimating the source of a network cascade given a time series of noisy information about its spread. Initially, there is a single vertex affected by the cascade (the source), and the cascade spreads in discrete time steps across the network. Although the evolution of the cascade is hidden, one observes a noisy measurement of its spread at each time step. Given this information, we aim to reliably estimate the source as fast as possible. We investigate Bayesian and minimax formulations of the estimation problem and derive near-optimal estimators for simple cascade dynamics and network topologies. In the Bayesian setting, samples are taken until the error of the Bayes-optimal estimator falls below a threshold. For the minimax setting, we design a novel multi-hypothesis sequential probability ratio test. These optimal estimators require $\log \log n / \log (k - 1)$ observations when the network is a $k$-regular tree, and $(\log n)^{\frac{1}{\ell + 1}}$ observations for an $\ell$-dimensional lattice. We then discuss conjectures on source estimation for general networks. Finally, we provide simulations which validate our theoretical results on trees and lattices, and illustrate the effectiveness of our methods for estimating the sources of cascades on Erdős-Rényi graphs.
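
A minimal sketch of the Bayesian stopping rule described in the abstract, under simplifying assumptions: the cascade spreads deterministically one hop per time step, each vertex's infection state is observed through a symmetric bit-flip channel, and the graph is a small hand-built binary tree. The names and parameters below (bayes_sequential_source_estimate, p_flip, err_target) are illustrative choices, not notation from the paper.

import math
import random

def ball(adj, v, r):
    """Vertices within graph distance r of v (breadth-first search)."""
    seen, frontier = {v}, {v}
    for _ in range(r):
        frontier = {w for u in frontier for w in adj[u]} - seen
        seen |= frontier
    return seen

def bayes_sequential_source_estimate(adj, source, p_flip=0.3, err_target=0.05,
                                     max_steps=50, rng=random):
    """Sample noisy snapshots until the Bayes error drops below err_target."""
    vertices = list(adj)
    log_post = {v: 0.0 for v in vertices}           # uniform prior over sources
    for t in range(max_steps):
        infected = ball(adj, source, t)             # true (hidden) cascade at time t
        # noisy snapshot: each vertex's infection bit is flipped w.p. p_flip
        obs = {v: (v in infected) ^ (rng.random() < p_flip) for v in vertices}
        # Bayes update: under hypothesis "source = s" the cascade at time t is ball(s, t)
        for s in vertices:
            hyp = ball(adj, s, t)
            for v in vertices:
                match = obs[v] == (v in hyp)
                log_post[s] += math.log(1 - p_flip if match else p_flip)
        # normalize (log-sum-exp) and check the Bayes error 1 - max posterior
        m = max(log_post.values())
        z = sum(math.exp(l - m) for l in log_post.values())
        post = {s: math.exp(log_post[s] - m) / z for s in vertices}
        best = max(post, key=post.get)
        if 1.0 - post[best] < err_target:
            return best, t + 1                      # estimate and observations used
    return max(log_post, key=log_post.get), max_steps

# Toy example: a depth-3 binary tree encoded as an adjacency dict.
adj = {i: [] for i in range(15)}
for child in range(1, 15):
    parent = (child - 1) // 2
    adj[parent].append(child)
    adj[child].append(parent)

est, n_obs = bayes_sequential_source_estimate(adj, source=6)
print(f"estimated source: {est}, observations used: {n_obs}")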


Similar Resources

Network Topology Inference Using Information Cascades with Limited Statistical Knowledge

We study the problem of inferring network topology from information cascades, in which the amount of time taken for information to diffuse across an edge in the network follows an unknown distribution. Unlike previous studies, which assume knowledge of these distributions, we only require that diffusion along different edges in the network be independent, together with limited moment information...
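
To make the observation model concrete, here is a toy simulation in which each cascade starts at a random seed and traverses edges with i.i.d. delays from a distribution unknown to the observer, who only records node activation times. The moment-based scoring at the end is a crude stand-in for topology inference and is not the estimator proposed in the cited work; the ring graph and exponential delays are assumptions made purely for illustration.

import heapq
import random
from itertools import combinations

def simulate_cascade(adj, seed, delay_sampler, rng=random):
    """First-activation time of every node: shortest paths under sampled edge delays."""
    w = {}                                          # one delay per edge, per cascade
    for u in adj:
        for v in adj[u]:
            e = (min(u, v), max(u, v))
            if e not in w:
                w[e] = delay_sampler(rng)
    times = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > times[u]:
            continue
        for v in adj[u]:
            tv = t + w[(min(u, v), max(u, v))]
            if tv < times.get(v, float("inf")):
                times[v] = tv
                heapq.heappush(heap, (tv, v))
    return times

# Delay law unknown to the observer (exponential here, purely for illustration).
delay = lambda rng: rng.expovariate(1.0)

# Ring graph on 8 nodes; 200 cascades started from random seeds.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
cascades = [simulate_cascade(adj, random.randrange(8), delay) for _ in range(200)]

# Crude moment-based score: pairs with the smallest mean activation gap are
# guessed to be edges (a stand-in only; not the cited paper's estimator).
gap = {}
for u, v in combinations(range(8), 2):
    diffs = [abs(c[u] - c[v]) for c in cascades]
    gap[(u, v)] = sum(diffs) / len(diffs)
guessed_edges = sorted(gap, key=gap.get)[:8]
print("guessed edges:", guessed_edges)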


Sidestepping Intractable Inference with Structured Ensemble Cascades

For many structured prediction problems, complex models often require adopting approximate inference techniques such as variational methods or sampling, which generally provide no satisfactory accuracy guarantees. In this work, we propose sidestepping intractable inference altogether by learning ensembles of tractable sub-models as part of a structured prediction cascade. We focus in particular...


Mutual Information in a Linear Noisy Network

A feedforward neural network of a given architecture provides a coding of its input data. In this work we consider a one-layer linear network, and we are interested in the network configurations, i.e., the structure of the synaptic couplings, which are able to resolve as many features as possible of the input data distribution under noisy conditions. Finding such optimal codings can be useful for both...
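
As a small numerical companion, the sketch below computes the mutual information of a linear noisy channel under standard Gaussian assumptions (white Gaussian input and additive white Gaussian noise), for which I(x; y) = (1/2) log det(I + W Wᵀ / σ²) in nats. The cited work's exact setting and noise model may differ; this is only a generic illustration.

import numpy as np

def gaussian_linear_mi(W, sigma=1.0):
    """Mutual information (nats) of y = W x + n, with x ~ N(0, I), n ~ N(0, sigma^2 I)."""
    m = W.shape[0]
    cov = np.eye(m) + (W @ W.T) / sigma**2
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * logdet

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 10)) / np.sqrt(10)   # a one-layer linear "coding" of 10 inputs
for sigma in (0.1, 1.0, 10.0):
    print(f"sigma={sigma:5.1f}  I(x;y) ~ {gaussian_linear_mi(W, sigma):.3f} nats")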


Computing with Noisy Information

This paper studies the depth of noisy decision trees in which each node gives the wrong answer with some constant probability. In the noisy Boolean decision tree model, tight bounds are given on the number of queries to input variables required to compute threshold functions, the parity function and symmetric functions. In the noisy comparison tree model, tight bounds are given on the number of...
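
The following sketch illustrates the noisy Boolean decision tree model: every query to an input bit returns the wrong value with constant probability, and a standard remedy, shown here for illustration rather than as the paper's construction, is to repeat each query O(log n) times and take a majority vote before evaluating, e.g., a threshold function.

import math
import random

def noisy_query(x, i, p=0.2, rng=random):
    """Return bit x[i], flipped with probability p."""
    return x[i] ^ (rng.random() < p)

def robust_bit(x, i, repeats, p=0.2, rng=random):
    """Majority vote over repeated noisy queries of the same bit."""
    ones = sum(noisy_query(x, i, p, rng) for _ in range(repeats))
    return int(2 * ones > repeats)

def noisy_threshold(x, k, p=0.2, delta=0.01, rng=random):
    """Decide whether at least k input bits are 1, from noisy queries only.

    Each bit is queried O(log(n/delta)) times, so roughly n log n noisy queries
    overall suffice for a correct answer with probability at least 1 - delta.
    """
    n = len(x)
    repeats = max(1, math.ceil(8 * math.log(n / delta)))
    recovered = [robust_bit(x, i, repeats, p, rng) for i in range(n)]
    return int(sum(recovered) >= k)

x = [random.randint(0, 1) for _ in range(64)]
print("true threshold:", int(sum(x) >= 32), " noisy estimate:", noisy_threshold(x, 32))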


Minimizing Loss of Information at Competitive PLIP Algorithms for Image Segmentation with Noisy Back Ground

In this paper, two training systems for selecting PLIP parameters have been demonstrated. The first compares the MSE of a high precision result to that of a lower precision approximation in order to minimize loss of information. The second uses EMEE scores to maximize visual appeal and further reduce information loss. It was shown that, in the general case of basic addition, subtraction, or mul...



Journal

Journal title: IEEE Transactions on Information Theory

Year: 2023

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2022.3220185